The Closing Conference of Embodied Semiotics

Monday 10 and Tuesday 11 February 2025

Information

The Closing Conference of ‘Embodied Semiotics: Understanding Gesture and Sign Language in Language Interaction’ will be held on 10 and 11 February 2025.

Dates: Monday 10 and Tuesday 11 February 2025
Venue: National Institute of Informatics, 12F conference room (1208/1210) (+ hybrid format using Zoom)
Languages: Japanese (JPN), Japanese Sign Language (JSL), International Sign (IS) (the primary language of each session is shown in [ ])
Participation: free of charge
Registration: Registration for both in-person and online sessions has reached capacity and is now closed.

Invited Lectures

Dr. Maartje De Meulder

HU-University of Applied Sciences Utrecht/Heriot-Watt University

Deaf lives, access, and agency: rethinking communication in the age of AI


Prof. Kearsy Cormier

DCAL–Deafness Cognition and Language Research Centre, University College London

AI technologies with sign languages: Moving beyond conventionalised signs


Prof. Sotaro Kita

University of Warwick

Gesture, metaphor, and spatial language


Program


10 February

Pre-event: Report on the research results of the UK and Japan Cross-Signing Project [IS] (Interpreters: on-site JSL, JPN / online JSL, JPN)
10:00-10:10 Opening remarks (Yutaka Osugi)
10:10-10:40 Project overview: Robert Adam (Heriot-Watt University)
10:40-11:10 Research report 1: Tomohiro Okada
11:10-11:40 Research report 2: Sujit Sahasrabudhe (Heriot-Watt University)
11:40-12:00 Commentary (Maartje De Meulder, HU-University of Applied Sciences Utrecht/Heriot-Watt University)

Embodied Semiotics Closing Conference
Part 1: Multimodal Interaction Studies with AI on our side [JPN] (Interpretation: on-site JSL, IS / online JSL)

13:30-13:50 Overview (Kouhei Kikuchi)
13:50-15:30 Presentation 1: ‘Overview and the case of pointing in JSL Corpus’ (Tomohiro Okada)
Presentation 2: ‘Overview and the case of pointing in SC corpus’ (Rui Sakaida)
Presentation 3: ‘Overview of detection of pointing by using AI’ (Junwen Mo, Minh-Duc Vo, Hideki Nakayama)
Presentation 4: ‘Indexes and AI recognition: from the viewpoint of semiotics’ (Takeshi Enomoto)
15:30-15:50 Break (20 min.)
15:50-17:20 Panel discussion (designated discussant: Sotaro Kita, University of Warwick)
18:00 Reception

11 February

Embodied Semiotics Closing Conference
Part 2: Research Report [JPN] (Interpretation: on-site JSL, IS / online JSL)

10:00-10:15 Opening remarks (Mayumi Bono)
10:15-10:35 A01 Report on the results ‘Annotation design based on semiotics’ (Koji Inoue, Katsuya Takanashi)
10:35-10:55 A02 Report on the results ‘Japanese Sign Language Corpus of Daily Conversation’ (Mayumi Bono, Tomohiro Okada)
10:55-11:05 Break (10 min.)
11:05-11:25 A03 Report on the results ‘YouTube as a resource for LLM’ (Hideki Nakayama, Minh-Duc Vo, Junwen Mo)
11:25-11:45 B01 Report on the results ‘Tracrin’ (Ryosaku Makino)
11:45-12:00 Commentary by external advisors (Yasuhiro Den)
12:00-13:00 Lunch break (60 min)

Embodied Semiotics Closing Conference
Part 3: Invited Lectures [JPN/IS] (Interpreters: on-site JSL / online JSL)

13:00-14:00 Invited Lecture 1: ‘Deaf lives, access, and agency: rethinking communication in the age of AI’ (Dr. Maartje De Meulder, HU-University of Applied Sciences Utrecht/Heriot-Watt University) (IS)
14:00-15:00 Invited Lecture 2: ‘AI technologies with sign languages: Moving beyond conventionalised signs’ (Prof. Kearsy Cormier, DCAL-Deafness Cognition and Language Research Centre, University College London) (IS)
15:00-15:15 Break (15 min)
15:15-16:15 Invited Lecture 3: ‘Gesture, metaphor, and spatial language’ (Prof. Sotaro Kita, University of Warwick) (JPN)
16:15-16:20 Closing

Invited Lectures

Dr. Maartje De Meulder

HU–University of Applied Sciences Utrecht/Heriot-Watt University

Title : Deaf lives, access, and agency: rethinking communication in the age of AI

Abstract : The rapid advancement of AI-driven language technologies, including generative AI tools, is transforming how deaf people navigate communication. Tools like real-time captions and speech recognition facilitate access, while emerging technologies such as conversational AI systems and experimental signing avatars act as communicative agents across multiple modalities and degrees of embodiment, from text-based interfaces to human-like robots. Meanwhile, human sign language interpreting, a cornerstone of linguistic access, faces systemic pressures across Western societies due to supply-demand imbalances and heavy reliance on interpreters. This presentation explores the implications of these shifts, including the creation of new access hierarchies that privilege some deaf users while marginalizing others. It examines how technoableism and technochauvinism in technology development can overestimate AI capabilities, risking harm through compromised access and the erosion of hard-won accessibility rights. These technologies also reshape language ideologies and redefine the role of sign language interpreting as a profession. Additionally, the presentation addresses systemic biases in the field of sign language AI and the need for sustainable, equitable research practices, centering deaf leadership so that impactful design decisions are made by those with a larger stake.

BIO : Dr. Maartje De Meulder is a senior researcher at the University of Applied Sciences Utrecht and a leading scholar at the intersection of Deaf Studies, Disability Studies, Sign Language Interpreting Studies and Human-Computer Interaction. Her interdisciplinary work addresses key societal challenges facing deaf communities, with a current focus on the ethics of AI-driven language technologies and their impact on deaf communication and access practices. As one of the few deaf researchers working on sign language technologies and AI, she has pioneered ethical and responsible approaches to their development, critically examining their socio-political implications. Her research bridges traditionally siloed fields, focusing on the intersections of language, technology and disability, while advocating for deaf leadership and sustainable, equitable research practices. Dr. De Meulder's work is widely published, and she actively supports the capacity building of deaf scholars through the Dr Deaf workshops.


Prof. Kearsy Cormier

DCAL–Deafness Cognition and Language Research Centre, University College London

Title : AI technologies with sign languages: Moving beyond conventionalised signs

Abstract : Sign languages are rich visual languages, comprising many different types of signs. These include lexical signs similar to words in spoken languages, the kinds of signs one might find in dictionaries, and also fingerspelling, the manual representation of written letters or characters – both of which are highly conventionalised in form and meaning. But a high proportion of any sign language production involves partly lexical or non-lexical signs – including pointing signs, depicting/classifier signs, and manual enactments (constructed action). Work on AI with sign languages has focused in large part on the conventionalised aspects of sign formation, particularly lexical signs and fingerspelling, largely because these are the easiest to translate to/from a spoken/written language, e.g. via dictionary lookups, with minimal context. Partly/non-lexicalised signs pose a much bigger challenge for AI because they vary so much in form and require considerable context to be interpreted. This presentation will explain these challenges – both for manual signs and nonmanual expressions – and consider what is needed to address them.

BIO : Kearsy Cormier is Professor of Sign Linguistics in the Department of Linguistics within the Division of Psychology and Language Sciences at University College London. She is also Director of the UCL Deafness Cognition and Language Research Centre, the largest research centre in Europe focusing on deaf people's communication and cognition. Her work in sign language linguistics is broad and covers all major subfields of linguistics. She is director of the national British Sign Language Corpus and BSL Signbank, and, using these and other resources, is working with computer scientists towards automatic translation between signed and spoken languages.


Prof. Sotaro Kita

University of Warwick

Title : Gesture, metaphor, and spatial language

Abstract : I will discuss how co-speech (i.e., speech-accompanying) gestures relate to language and the conceptualisation underlying language. I will focus on “representational gestures”, which can depict motion, action, and shape or can indicate locations (i.e., “iconic” and “deictic” gestures in McNeill's 1992 classification). I will provide evidence for two points: various aspects of language shape co-speech gestures, and, conversely, the way we produce co-speech gestures can shape language. I will discuss these issues in relation to manner and path in motion event descriptions, clause-linkage types in complex event descriptions, and metaphor. I will conclude that gesture and language are parts of a “conceptualisation engine”, which takes advantage of the unique strengths of spatio-motoric representation and linguistic representation.

BIO : Sotaro Kita is Professor of Psychology of Language at the University of Warwick in the UK. After earning a bachelor's and a master's degree in mathematical engineering and information engineering from the University of Tokyo, he obtained a PhD in linguistics and psychology from the University of Chicago in 1993. He established and led the “Gesture Project” at the Max Planck Institute for Psycholinguistics in the Netherlands from 1993 to 2003. He then held faculty positions at the University of Bristol and the University of Birmingham before joining the University of Warwick. He served as President of the International Society of Gesture Studies and as Editor of the journal GESTURE. He has investigated how gesture relates to various aspects of language: motion events (Kita & Özyürek, 2003, Journal of Memory and Language) and metaphor (Argyriou, Mohr & Kita, 2017, JEP: Learning, Memory and Cognition). He has also investigated how gesture varies cross-culturally (Kita, 2009, Language and Cognitive Processes), how gesture production shapes gesturers' conceptualization (Kita, Alibali, & Chu, 2017, Psychological Review), and how deaf Nicaraguan children turned gesture into an emerging sign language (Senghas, Kita & Özyürek, 2004, Science). His current research topics include how children's word learning can be facilitated by sound symbolism and gesture, how communicative contexts influence infants' and adults' gesture production, and how “silent gestures” show language-like features.


Organised by: Japan Society for the Promotion of Science (JSPS): Grants-in-Aid for Transformative Research Areas (B) ‘Embodied Semiotics: Understanding Gesture and Sign Language in Language Interaction’ (Area Representative: Mayumi Bono)
https://research.nii.ac.jp/EmSemi/

Co-organiser: Japan Society for the Promotion of Science (JSPS): International Joint Research Programme (JRP-LEAD with UKRI) ‘Understanding cross-signing phenomena in video conferencing situations during and post-COVID-19 in rural areas’ (PI: Mayumi Bono)
https://www.cross-sign.nii.ac.jp